
Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex



Abstract

During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape the neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during the processing of naturalistic acoustic speech, singing, and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations, mostly in the middle and superior temporal gyri (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in the speech and music stimuli. A concurrent visual stimulus modulated activity in the bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and the bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in the anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. These anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively.
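The decoding approach described in the abstract (sparse logistic regression classifiers evaluated by leaving out one participant at a time) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it substitutes scikit-learn's L1-penalized logistic regression for the Bayesian classifier with a sparsity-promoting prior, and it runs on synthetic data; the array names (X, y, groups) and all parameter settings are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical inputs: X is an (n_samples, n_voxels) matrix of voxel responses,
# y labels each sample's stimulus class (e.g. piano / speech / singing), and
# groups holds each sample's participant ID so every cross-validation fold
# leaves one participant out entirely.
rng = np.random.default_rng(0)
n_participants, samples_per_participant, n_voxels = 16, 30, 500
X = rng.standard_normal((n_participants * samples_per_participant, n_voxels))
y = rng.integers(0, 3, size=X.shape[0])          # three stimulus classes
groups = np.repeat(np.arange(n_participants), samples_per_participant)

# L1-penalized (sparse) logistic regression as a frequentist stand-in for a
# Bayesian classifier with a sparsity-promoting prior over voxel weights.
clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)

# Leave-one-participant-out cross-validation: train on 15 participants,
# test on the held-out one, repeated for every participant.
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy over held-out participants: {scores.mean():.3f}")
```

With real data, the nonzero weights of each trained classifier would indicate which voxels carry the discriminative "signature pattern" for a given stimulus contrast.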
